
    Deep Cellular Recurrent Neural Architecture for Efficient Multidimensional Time-Series Data Processing

    Efficient processing of time-series data is a fundamental yet challenging problem in pattern recognition. Though recent developments in machine learning (ML) and deep learning (DL) have enabled remarkable improvements in processing large-scale datasets in many application domains, most models are designed to handle inputs that are static in time. Real-world data in biomedical, surveillance and security, financial, manufacturing, and engineering applications are rarely static in time and demand models able to recognize patterns in both space and time. ML and DL models adapted for time-series processing tend to grow in complexity and size to accommodate the additional dimension of time. In particular, the biologically inspired learning models known as artificial neural networks, which have shown extraordinary success in pattern recognition, tend to grow prohibitively large and cumbersome for large-scale, multi-dimensional time-series biomedical data such as electroencephalography (EEG). Consequently, this work aims to develop representative ML and DL models for robust and efficient large-scale time-series processing. First, we design a novel ML pipeline with efficient feature engineering to process a large-scale multi-channel scalp EEG dataset for automated detection of epileptic seizures. Using a sophisticated yet computationally efficient time-frequency analysis technique known as the harmonic wavelet packet transform, together with an efficient self-similarity measure based on fractal dimension, we achieve state-of-the-art performance for automated seizure detection in EEG data. Subsequently, we investigate the development of a novel, efficient deep recurrent learning model for large-scale time-series processing. For this, we first study the functionality and training of a biologically inspired neural network architecture known as the cellular simultaneous recurrent neural network (CSRN). We generalize this network to multiple topological image processing tasks and investigate the learning efficacy of the complex cellular architecture using several state-of-the-art training methods. Finally, we develop a novel deep cellular recurrent neural network (DCRNN) architecture, based on the biologically inspired distributed processing of the CSRN, for processing time-series data. The proposed DCRNN leverages the cellular recurrent architecture to promote extensive weight sharing and efficient, individualized, synchronous processing of multi-source time-series data. Experiments on a large-scale multi-channel scalp EEG dataset and a machine fault detection dataset show that the proposed DCRNN offers state-of-the-art recognition performance while using substantially fewer trainable recurrent units.
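    The abstract names the harmonic wavelet packet transform and a fractal-dimension self-similarity feature but does not spell out the computation. As a minimal, hypothetical sketch of the fractal-dimension side, the following Python snippet implements Higuchi's fractal dimension, one common estimator for EEG windows; the function name, k_max value, and window size are illustrative assumptions, not details taken from the thesis.

        import numpy as np

        def higuchi_fd(x, k_max=8):
            """Higuchi fractal dimension of a 1-D signal (illustrative)."""
            x = np.asarray(x, dtype=float)
            n = len(x)
            mean_lengths = []
            for k in range(1, k_max + 1):
                lengths = []
                for m in range(k):
                    idx = np.arange(m, n, k)  # subsampled series for offset m
                    if len(idx) < 2:
                        continue
                    # normalized curve length of the subsampled series
                    dist = np.abs(np.diff(x[idx])).sum()
                    lengths.append(dist * (n - 1) / ((len(idx) - 1) * k * k))
                mean_lengths.append(np.mean(lengths))
            # FD is the slope of log(mean length) vs. log(1/k)
            k_vals = np.arange(1, k_max + 1)
            slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(mean_lengths), 1)
            return slope

        # Toy usage: one 2-second window at 256 Hz (white noise gives FD near 2)
        rng = np.random.default_rng(0)
        print(higuchi_fd(rng.standard_normal(512)))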

    Deep Learning Based Superconducting Radio-Frequency Cavity Fault Classification at Jefferson Laboratory

    This work investigates the efficacy of deep learning (DL) for classifying C100 superconducting radio-frequency (SRF) cavity faults in the Continuous Electron Beam Accelerator Facility (CEBAF) at Jefferson Lab. CEBAF is a large, high-power, continuous-wave recirculating linac that utilizes 418 SRF cavities to accelerate electrons up to 12 GeV. Recent upgrades to CEBAF include the installation of 11 new cryomodules (88 cavities) equipped with a low-level RF system that records RF time-series data from each cavity at the onset of an RF failure. Typically, subject matter experts (SMEs) analyze this data to determine the fault type and identify the cavity of origin. This information is subsequently used to identify failure trends and to implement corrective measures on the offending cavity. Manual inspection of the large-scale time-series data generated by frequent system failures is tedious and time consuming, which motivates the use of machine learning (ML) to automate the task. This study extends work on a previously developed system based on traditional ML methods (Tennant, Carpenter, Powers, Shabalina Solopova, Vidyaratne, and Iftekharuddin, Phys. Rev. Accel. Beams, 2020, 23, 114601) and investigates the effectiveness of deep learning approaches. The transition to a DL model is driven by the goal of developing a system with inference fast enough to predict a fault event and provide actionable information before its onset (on the order of a few hundred milliseconds). Because features are learned rather than explicitly computed, DL offers a potential advantage over traditional ML. Specifically, two seminal DL architecture types are explored: deep recurrent neural networks (RNNs) and deep convolutional neural networks (CNNs). We provide a detailed analysis of the performance of individual models using an RF waveform dataset built from past operational runs of CEBAF. In particular, the performance of RNN models incorporating long short-term memory (LSTM) is analyzed alongside the CNN performance. Furthermore, comparing these DL models with the state-of-the-art traditional ML model shows that the DL architectures obtain similar performance for cavity identification, do not perform quite as well for fault classification, but provide an advantage in inference speed.
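    The abstract does not include implementation details, so here is a minimal PyTorch sketch of the kind of LSTM-based waveform classifier described above; the channel count, sequence length, layer sizes, and number of fault classes are assumed placeholders, not the paper's actual configuration.

        import torch
        import torch.nn as nn

        class WaveformLSTM(nn.Module):
            """Illustrative LSTM classifier for multi-channel RF waveforms."""
            def __init__(self, n_channels=17, hidden_size=64, n_classes=8):
                super().__init__()
                # batch_first=True: input is (batch, time_steps, n_channels)
                self.lstm = nn.LSTM(n_channels, hidden_size, num_layers=2,
                                    batch_first=True)
                self.head = nn.Linear(hidden_size, n_classes)

            def forward(self, x):
                out, _ = self.lstm(x)         # (batch, time_steps, hidden_size)
                return self.head(out[:, -1])  # classify from the last time step

        # Toy usage: a batch of 4 waveforms, 1024 samples, 17 signals (assumed)
        model = WaveformLSTM()
        logits = model(torch.randn(4, 1024, 17))
        print(logits.shape)  # torch.Size([4, 8])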

    Identifying the Best Machine Learning Algorithms for Brain Tumor Segmentation, Progression Assessment, and Overall Survival Prediction in the BRATS Challenge

    Gliomas are the most common primary brain malignancies, with different degrees of aggressiveness, variable prognosis, and various heterogeneous histologic sub-regions, i.e., peritumoral edematous/invaded tissue, necrotic core, and active and non-enhancing core. This intrinsic heterogeneity is also portrayed in their radio-phenotype, as their sub-regions are depicted by varying intensity profiles disseminated across multi-parametric magnetic resonance imaging (mpMRI) scans, reflecting varying biological properties. Their heterogeneous shape, extent, and location are some of the factors that make these tumors difficult to resect, and in some cases inoperable. The amount of resected tumor is also a factor considered in longitudinal scans when evaluating the apparent tumor for potential diagnosis of progression. Furthermore, there is mounting evidence that accurate segmentation of the various tumor sub-regions can offer the basis for quantitative image analysis towards prediction of patient overall survival. This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018. Specifically, we focus on i) evaluating segmentations of the various glioma sub-regions in pre-operative mpMRI scans, ii) assessing potential tumor progression by virtue of longitudinal growth of tumor sub-regions, beyond use of the RECIST/RANO criteria, and iii) predicting the overall survival from pre-operative mpMRI scans of patients who underwent gross total resection. Finally, we investigate the challenge of identifying the best ML algorithms for each of these tasks, considering that, apart from being diverse in each instance of the challenge, the multi-institutional mpMRI BraTS dataset has also been a continuously evolving/growing dataset.
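    The segmentation evaluations referenced above are conventionally reported with overlap metrics such as the Dice coefficient. As a worked illustration (not the challenge's official evaluation code), a minimal NumPy implementation for one binary sub-region mask might look like this:

        import numpy as np

        def dice_coefficient(pred, truth):
            """Dice overlap of two binary masks: 2|A∩B| / (|A| + |B|)."""
            pred = np.asarray(pred, dtype=bool)
            truth = np.asarray(truth, dtype=bool)
            intersection = np.logical_and(pred, truth).sum()
            denom = pred.sum() + truth.sum()
            return 2.0 * intersection / denom if denom else 1.0  # both empty

        # Toy 3-D masks standing in for one sub-region (e.g., whole tumor)
        rng = np.random.default_rng(0)
        pred = rng.random((8, 8, 8)) > 0.5
        truth = rng.random((8, 8, 8)) > 0.5
        print(round(dice_coefficient(pred, truth), 3))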